Unit 2: Least Squares, Determinants and Eigenvalues

Each component of a vector in $R^n$ indicates a distance along one of the coordinate axes. This practice of dissecting a vector into directional components is an important one. In particular, it leads to the "least squares" method of fitting curves to collections of data. This unit also introduces matrix eigenvalues and eigenvectors. Many calculations become simpler when working with a basis of eigenvectors.

The determinant of a matrix is a number characterizing that matrix. This value is useful for determining whether a matrix is singular, computing its inverse, and more.

L1: Orthogonal Vectors and Subspaces

Vectors are easier to understand when they're described in terms of orthogonal bases. In addition, the Four Fundamental Subspaces are orthogonal to each other in pairs.

If A is a rectangular matrix, Ax = b is often unsolvable. The matrix $A^TA$ will help us find a vector x̂ that comes as close as possible to solving Ax = b.
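As a quick illustration of the pairwise orthogonality, here is a small NumPy check (the matrix is our own example, not from the textbook) showing that every row of A is perpendicular to every nullspace vector:

```python
import numpy as np

# An illustrative 2x3 matrix whose nullspace is easy to find by hand.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])

# x = (1, -1, 1) solves Ax = 0, so x lies in the nullspace of A.
x = np.array([1.0, -1.0, 1.0])

# A @ x computes the dot product of each row of A with x; every entry
# is zero, so the row space and nullspace are orthogonal subspaces.
print(A @ x)
```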

Read Section 4.1 in the textbook.

Exercise

L2: Projections onto Subspaces

We often want to find the line (or plane, or hyperplane) that best fits our data. This amounts to finding the best possible approximation to some unsolvable system of linear equations Ax = b. The algebra of finding these best fit solutions begins with the projection of a vector onto a subspace.

Read Section 4.2 in the textbook.

Exercise

L3: Projection Matrices and Least Squares

Linear regression is commonly used to fit a line to a collection of data. The method of least squares can be viewed as finding the projection of a vector. Linear algebra provides a powerful and efficient description of linear regression in terms of the matrix ATA.
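A minimal sketch of least squares via the normal equations $A^TA\hat{x} = A^Tb$; the three data points are an illustrative example, not from the textbook:

```python
import numpy as np

# Illustrative data: fit the line y = C + D t to three points.
t = np.array([0.0, 1.0, 2.0])
y = np.array([1.0, 2.0, 4.0])

# Build A so that Ax = y would be exact; no line passes through all
# three points, so instead we solve the normal equations.
A = np.column_stack([np.ones_like(t), t])
x_hat = np.linalg.solve(A.T @ A, A.T @ y)
C, D = x_hat     # intercept and slope of the best-fit line
```

In practice `np.linalg.lstsq` solves the same problem without forming $A^TA$ explicitly.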

Read Section 4.3 in the textbook.

Exercise

L4: Orthogonal Matrices and Gram-Schmidt

Many calculations become simpler when performed using orthonormal vectors or orthogonal matrices. In this session, we learn a procedure for converting any basis to an orthonormal one.
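The procedure can be sketched in a few lines of NumPy (classical Gram-Schmidt on an illustrative matrix; this is a teaching sketch, not a numerically robust routine):

```python
import numpy as np

def gram_schmidt(A):
    """Orthonormalize the columns of A by classical Gram-Schmidt.

    Assumes the columns are independent. For production work, use
    np.linalg.qr, which is numerically more stable.
    """
    Q = np.zeros_like(A, dtype=float)
    for j in range(A.shape[1]):
        v = A[:, j].astype(float)
        for i in range(j):
            # Subtract the projection onto each earlier direction.
            v = v - (Q[:, i] @ A[:, j]) * Q[:, i]
        Q[:, j] = v / np.linalg.norm(v)   # normalize to unit length
    return Q

A = np.array([[1.0, 1.0],
              [1.0, 0.0],
              [0.0, 1.0]])
Q = gram_schmidt(A)
# Q has orthonormal columns: Q^T Q = I.
```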

Read Section 4.4 in the textbook.

Exercise

L5: Properties of Determinants

The determinant of a matrix is a single number which encodes a lot of information about the matrix. Three simple properties completely describe the determinant. In this lecture we also list seven more properties like detAB = (detA)(detB) that can be derived from the first three.
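These properties are easy to spot-check numerically. A small sketch with random matrices (our own example) verifying the product rule, the sign flip under row exchange, and det I = 1:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

# Product rule: det(AB) = (det A)(det B).
lhs = np.linalg.det(A @ B)
rhs = np.linalg.det(A) * np.linalg.det(B)

# Exchanging two rows reverses the sign of the determinant.
A_swapped = A[[1, 0, 2, 3], :]

# det(I) = 1 is one of the three defining properties.
det_I = np.linalg.det(np.eye(4))
```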

Read Section 5.1 in the textbook.

Exercise

L6: Determinant Formulas and Cofactors

Starting from the three defining properties of the determinant, we derive explicit formulas for computing it: the "big formula" with n! terms and the cofactor expansion, which expresses the determinant of an n by n matrix in terms of determinants of smaller matrices.
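A minimal sketch of cofactor expansion along the first row (illustrative only; the recursion costs roughly n! operations, so elimination-based `np.linalg.det` is what you would use in practice):

```python
import numpy as np

def det_cofactor(A):
    """Determinant by cofactor expansion along the first row."""
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0.0
    for j in range(n):
        # Minor: delete row 0 and column j.
        minor = np.delete(np.delete(A, 0, axis=0), j, axis=1)
        # Cofactor C_1j carries the sign (-1)^(1+j).
        total += (-1) ** j * A[0, j] * det_cofactor(minor)
    return total

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 1.0],
              [0.0, 1.0, 2.0]])
```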

Read Section 5.2 in the textbook.

Exercise

L7: Cramer's Rule, Inverse Matrix and Volume

Now we start to use the determinant. Understanding the cofactor formula allows us to show that $A^{-1} = \frac{1}{\det A}C^T$, where C is the matrix of cofactors of A. Combining this formula with the equation $x = A^{-1}b$ gives us Cramer's rule for solving Ax = b. Also, the absolute value of the determinant gives the volume of a box.
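Cramer's rule is short enough to sketch directly (the 2 by 2 system is an illustrative example; elimination is far faster for large systems):

```python
import numpy as np

def cramer_solve(A, b):
    """Solve Ax = b by Cramer's rule: x_j = det(B_j) / det(A),
    where B_j is A with column j replaced by b."""
    d = np.linalg.det(A)
    x = np.empty(len(b))
    for j in range(len(b)):
        Bj = A.copy()
        Bj[:, j] = b
        x[j] = np.linalg.det(Bj) / d
    return x

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([3.0, 5.0])
x = cramer_solve(A, b)

# |det A| is the area of the parallelogram spanned by the rows of A.
area = abs(np.linalg.det(A))
```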

Read Section 5.3 in the textbook.

Exercise

L8: Eigenvalues and Eigenvectors

If the product Ax points in the same direction as the vector x, we say that x is an eigenvector of A. Eigenvalues and eigenvectors describe what happens when a matrix is multiplied by a vector. In this session we learn how to find the eigenvalues and eigenvectors of a matrix.
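NumPy computes eigenvalues and eigenvectors directly; a small check with an illustrative symmetric matrix that each pair satisfies $Av = \lambda v$:

```python
import numpy as np

# An illustrative symmetric matrix.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigvals, eigvecs = np.linalg.eig(A)

# Each column v of eigvecs is an eigenvector: A v = lambda v.
for lam, v in zip(eigvals, eigvecs.T):
    assert np.allclose(A @ v, lam * v)
```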

Read Section 6.1 6.2 in the textbook.

Exercise

L9: Diagonalization and Powers of A

If A has n independent eigenvectors, we can write $A = S\Lambda S^{-1}$, where $\Lambda$ is a diagonal matrix containing the eigenvalues of A. This allows us to easily compute powers of A, which in turn allows us to solve difference equations $u_{k+1} = Au_k$.
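A sketch of this idea using the Fibonacci matrix as an illustrative example: raising A to a power only requires raising the eigenvalues to that power.

```python
import numpy as np

# u_{k+1} = A u_k with this A generates the Fibonacci numbers.
A = np.array([[1.0, 1.0],
              [1.0, 0.0]])

eigvals, S = np.linalg.eig(A)

def matrix_power(k):
    # A^k = S Lambda^k S^{-1}: only the eigenvalues are powered.
    return S @ np.diag(eigvals ** k) @ np.linalg.inv(S)

A10 = matrix_power(10)
```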

Read Section 6.2 in the textbook.

Exercise

L10: Differential Equations and exp(At)

We can copy the Taylor series for $e^x$ to define $e^{At}$ for a matrix A. If A is diagonalizable, we can use $\Lambda$ to find the exact value of $e^{At}$. This allows us to solve systems of differential equations $du/dt = Au$ the same way we solved equations like $dy/dt = ky$.
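Both definitions can be sketched and compared in NumPy (the matrix is an illustrative rotation generator of our own choosing; for production use there is `scipy.linalg.expm`):

```python
import numpy as np

# du/dt = Au with this A describes rotation in the plane.
A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])

def expm_eig(A, t):
    """e^{At} via diagonalization: S e^{Lambda t} S^{-1}.
    Assumes A is diagonalizable; eigenvalues may be complex."""
    eigvals, S = np.linalg.eig(A.astype(complex))
    return (S @ np.diag(np.exp(eigvals * t)) @ np.linalg.inv(S)).real

def expm_series(A, t, terms=30):
    """Truncated Taylor series: I + At + (At)^2/2! + ..."""
    term = np.eye(A.shape[0])
    result = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ (A * t) / k
        result = result + term
    return result
```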

Read Section 6.3 in the textbook.

Exercise

L11: Markov Matrices; Fourier Series

Like differential equations, Markov matrices describe changes over time. Once again, the eigenvalues and eigenvectors describe the long term behavior of the system. In this session we also learn about Fourier series, which describe periodic functions as points in an infinite dimensional vector space.
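A small sketch of the long-term behavior using an illustrative 2 by 2 Markov matrix: the steady state is the eigenvector with eigenvalue 1, and repeated multiplication converges to it.

```python
import numpy as np

# An illustrative Markov matrix: each column sums to 1.
A = np.array([[0.9, 0.2],
              [0.1, 0.8]])

# The steady state is the eigenvector with eigenvalue 1.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmin(abs(eigvals - 1.0))
steady = eigvecs[:, k] / eigvecs[:, k].sum()   # scale entries to sum to 1

# Iterating u_{k+1} = A u_k approaches the steady state, because the
# other eigenvalue has |lambda| < 1 and its component dies out.
u = np.array([1.0, 0.0])
for _ in range(100):
    u = A @ u
```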

Read Section 8.3, 8.5 in the textbook.

Exercise

Exam review

Unit 2 covered a lot of material, the heart of this course. To go beyond the explanations in the lecture videos, try reading the lecture summaries, which outline orthogonality and least squares, determinants, and eigenvalues.